Model Poisoning, Adversarial Examples, Prompt Injection, AI Safety
AI Safety at the Frontier: Paper Highlights, July '25
lesswrong.com·15h
Securing AI requires life cycle thinking and reducing unintended consequences (WHY2025)
cdn.media.ccc.de·4h
What I'm reading (#2): More on Kimi K2, how to build a bad research center, Pretraining with RL, and sporks of AGI
interconnects.ai·14h
The Lie of AI
around.com·16h
MCP vs A2A - A Complete Deep Dive
hackernoon.com·10h
GPT-5 prompting guide
cookbook.openai.com·22h
how to effortlessly become the best student in your school using AI:
threadreaderapp.com·14h
OpenHands ZombAI Exploit: Prompt Injection To Remote Code Execution
embracethered.com·16h
Large Language Models Are Designed to Be Average
spin.atomicobject.com·15h
A Thing or Two About RSA
nflatrea.bearblog.dev·5h
0click Enterprise compromise – thank you, AI! (WHY2025)
cdn.media.ccc.de·16h
🎲 Use LLMs to use chess engines, not to play chess
edjohnsonwilliams.co.uk·1h